Forecasting with Universal Approximators and a Learning Algorithm
Authors
Abstract
Similar Resources
Universal Value Function Approximators
Value functions are a core component of reinforcement learning systems. The main idea is to construct a single function approximator V(s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V(s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient techni...
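A minimal sketch of the idea under stated assumptions: the approximator below is a small two-layer numpy network over the concatenated state and goal, updated by a squared-error gradient step toward a supplied target. The dimensions, layer sizes, and squared-error update are illustrative choices, not the architecture or training scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a toy UVFA V(s, g; theta).
STATE_DIM, GOAL_DIM, HIDDEN = 4, 4, 32

# theta: weights of a two-layer network over the concatenated (s, g) input.
W1 = rng.normal(scale=0.1, size=(STATE_DIM + GOAL_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=HIDDEN)
b2 = 0.0


def uvfa_value(s, g):
    """Estimate the long-term reward of state s under goal g."""
    x = np.concatenate([s, g])
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2


def sgd_step(s, g, target, lr=1e-2):
    """One squared-error gradient step toward a supplied target (e.g. a TD target)."""
    global W1, b1, W2, b2
    x = np.concatenate([s, g])
    h = np.tanh(x @ W1 + b1)
    err = (h @ W2 + b2) - target
    grad_h = err * W2 * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * err * h
    b2 -= lr * err
    W1 -= lr * np.outer(x, grad_h)
    b1 -= lr * grad_h


s, g = rng.normal(size=STATE_DIM), rng.normal(size=GOAL_DIM)
sgd_step(s, g, target=1.0)
print(uvfa_value(s, g))
```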
Fuzzy systems with defuzzification are universal approximators
In this paper, we consider a fundamental theoretical question: Is it always possible to design a fuzzy system capable of approximating any real continuous function on a compact set with arbitrary accuracy? Moreover, we investigate whether the answer to this question remains positive when we restrict attention to a fixed (but arbitrary) type of fuzzy reasoning and to a subclass of fuzzy relations. This resul...
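As an illustration of the kind of construction behind such results, the sketch below builds a zero-order Takagi-Sugeno fuzzy system with Gaussian membership functions and weighted-average defuzzification, then checks how closely it tracks a continuous function on the compact set [0, 1]. The membership functions, rule placement, and target function are assumptions chosen for the demo, not the specific class of fuzzy reasoning analysed in the paper.

```python
import numpy as np


def fuzzy_system(centers, widths, consequents):
    """Zero-order Takagi-Sugeno system: Gaussian membership functions
    and weighted-average defuzzification."""
    def f(x):
        # Firing strength of each rule "IF x is near c_i THEN y = y_i".
        mu = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
        return np.sum(mu * consequents) / np.sum(mu)
    return f


# Approximate a continuous target on the compact set [0, 1]
# by placing one rule at each grid point.
target = lambda x: np.sin(2.0 * np.pi * x)
centers = np.linspace(0.0, 1.0, 25)
widths = np.full_like(centers, 0.04)
system = fuzzy_system(centers, widths, consequents=target(centers))

grid = np.linspace(0.0, 1.0, 200)
max_err = max(abs(system(x) - target(x)) for x in grid)
print(f"max error on [0, 1]: {max_err:.4f}")  # shrinks as more rules are added
```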
Universal ε-approximators for integrals
Let X be a space and F a family of {0, 1}-valued functions on X. Vapnik and Chervonenkis showed that if F is "simple" (finite VC dimension), then for every probability measure μ on X and ε > 0 there is a finite set S such that for all f ∈ F, ∑_{x∈S} f(x)/|S| = ∫ f(x) dμ(x) ± ε. Think of S as a "universal ε-approximator" for integration in F. S can actually be obtained w.h.p. just by sampling a f...
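A small numerical illustration of the sampling statement, assuming the simple family of threshold indicators f_t(x) = 1[x ≤ t] on X = [0, 1] (VC dimension 1) and the uniform measure, so the true integral of f_t is t. The family, the measure, and the sample size are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# F: threshold indicators f_t(x) = 1[x <= t] on X = [0, 1], VC dimension 1.
# mu: the uniform measure, so the true value of the integral of f_t is t.
eps = 0.05
S = rng.uniform(0.0, 1.0, size=2000)   # candidate universal eps-approximator

# Check |sum_{x in S} f_t(x)/|S| - integral of f_t dmu| <= eps across the family.
thresholds = np.linspace(0.0, 1.0, 1001)
empirical = np.array([(S <= t).mean() for t in thresholds])
worst_gap = np.max(np.abs(empirical - thresholds))
print(f"worst deviation over the family: {worst_gap:.4f} (eps = {eps})")
```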
Uncertain Systems are Universal Approximators
Uncertain inference is a process of deriving consequences from uncertain knowledge or evidence via the tool of conditional uncertain sets. Based on uncertain inference, an uncertain system is a function from its inputs to its outputs. This paper proves that uncertain systems are universal approximators, which means that uncertain systems are capable of approximating any continuous function on a compac...
A Learning Rule for Universal Approximators with a Single Non-Linearity
A learning algorithm is presented for circuits consisting of a single soft winner-take-all or k-winner-take-all gate applied to linear sums. We show that for these circuits gradient descent with respect to a suitable error function does not run into the familiar credit-assignment problem. Furthermore, in contrast to backpropagation for multi-layer perceptrons, this learning algorithm does not require th...
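A minimal sketch of a circuit of this shape: a soft winner-take-all gate, implemented here as a softmax, applied to linear sums of the inputs, trained by plain gradient descent. The cross-entropy error and the toy data are assumptions for the demo; the paper's "suitable error function" may differ.

```python
import numpy as np

rng = np.random.default_rng(0)


def soft_wta(z):
    """Soft winner-take-all gate over the linear sums z (a softmax)."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


# Toy data: three noisy clusters, one per output unit (purely illustrative).
n_per, d, k = 100, 2, 3
means = np.array([[0.0, 3.0], [3.0, 0.0], [-3.0, -3.0]])
X = np.vstack([rng.normal(m, 1.0, size=(n_per, d)) for m in means])
Y = np.repeat(np.arange(k), n_per)

W = np.zeros((d, k))   # weights of the k linear sums
b = np.zeros(k)

for _ in range(500):
    P = soft_wta(X @ W + b)               # circuit output
    G = P.copy()
    G[np.arange(len(Y)), Y] -= 1.0        # cross-entropy gradient w.r.t. the linear sums
    W -= 0.1 * X.T @ G / len(Y)           # gradient descent on W and b
    b -= 0.1 * G.mean(axis=0)

accuracy = (soft_wta(X @ W + b).argmax(axis=1) == Y).mean()
print(f"training accuracy: {accuracy:.2f}")
```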
Journal
Journal title: Journal of Time Series Econometrics
Year: 2011
ISSN: 1941-1928
DOI: 10.2202/1941-1928.1084